
    Snake Robots for Surgical Applications: A Review

    Although substantial advances have been achieved in robot-assisted surgery, existing work on snake robotics focuses predominantly on preliminary structural design, control, and human–robot interfaces, leaving several features largely unexplored in the literature. This paper reviews planning and operation concepts of hyper-redundant serpentine robots for surgical use, together with future challenges and possible solutions for better manipulation. Researchers working on the manufacture and navigation of snake robots face issues such as low dexterity of the end-effectors around delicate organs, state estimation, and a lack of depth perception on two-dimensional screens. A wide range of robots is analysed, such as the i2Snake robot, which has inspired the use of force and position feedback, visual servoing, and augmented reality (AR). We present the actuation methods, robot kinematics, dynamics, sensing, and prospects for AR integration in snake robots, whilst addressing their shortcomings to facilitate the surgeon's task. For smoother gait control, validation and optimization algorithms, such as those built on deep-learning databases, are examined to mitigate redundancy in module linkage backlash and accidental self-collision. In essence, we aim to provide an outlook on robot configurations during motion by enhancing their material compositions within anatomical biocompatibility standards.
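    As a hedged illustration of the gait and kinematics topics this review covers: a standard model for serpentine gait control is Hirose's serpenoid curve, and the short Python sketch below computes planar forward kinematics for a hyper-redundant chain driven by it. This is not any reviewed robot's implementation; the module count, link length, and gait parameters are all assumed values.

    import numpy as np

    # Minimal sketch (assumed parameters): planar forward kinematics of a
    # hyper-redundant serpentine robot whose relative joint angles follow
    # Hirose's serpenoid curve, a common gait model for snake robots.
    N_LINKS = 10        # number of modules (hypothetical)
    LINK_LEN = 0.02     # link length in metres (hypothetical)
    ALPHA, OMEGA, BETA = np.deg2rad(30), 2 * np.pi, np.deg2rad(40)

    def serpenoid_joint_angles(t: float) -> np.ndarray:
        """Relative joint angle of each module at time t."""
        i = np.arange(N_LINKS)
        return ALPHA * np.sin(OMEGA * t + i * BETA)

    def forward_kinematics(joint_angles: np.ndarray) -> np.ndarray:
        """Planar positions of each joint, with the base fixed at the origin."""
        headings = np.cumsum(joint_angles)          # absolute link headings
        dx = LINK_LEN * np.cos(headings)
        dy = LINK_LEN * np.sin(headings)
        pts = np.column_stack((np.cumsum(dx), np.cumsum(dy)))
        return np.vstack(([0.0, 0.0], pts))         # prepend the base point

    if __name__ == "__main__":
        pts = forward_kinematics(serpenoid_joint_angles(t=0.25))
        print("end-effector position:", pts[-1])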

    Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision, as well as on improving access for minimally invasive surgeries. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human–robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement along complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, their computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collision detection and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
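    To make the tool-to-organ collision detection idea concrete, the following is a minimal geometric sketch under assumed data, not the pipeline of any platform reviewed above: it flags a sampled tool-tip trajectory that comes within a safety margin of an organ point cloud. The margin value and the point clouds are hypothetical.

    import numpy as np

    SAFETY_MARGIN = 0.005   # 5 mm clearance (assumed value)

    def min_clearance(tool_path: np.ndarray, organ_points: np.ndarray) -> float:
        """Smallest tool-to-organ distance along a sampled trajectory."""
        # Pairwise distances between every path sample and every organ point.
        d = np.linalg.norm(tool_path[:, None, :] - organ_points[None, :, :], axis=2)
        return float(d.min())

    def path_is_safe(tool_path, organ_points, margin=SAFETY_MARGIN) -> bool:
        """True if the whole trajectory stays outside the safety margin."""
        return min_clearance(tool_path, organ_points) > margin

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        path = rng.uniform(0.0, 0.1, size=(50, 3))     # sampled tool-tip points (m)
        organ = rng.uniform(0.05, 0.15, size=(200, 3)) # organ surface point cloud (m)
        print("trajectory safe:", path_is_safe(path, organ))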

    Optimized Deep Learning Model for Predicting Tumor Location in Medical Images for Robotic Trajectory Mapping

    Recent studies show that pre-operative target localization achieves efficiencies of up to 93.3% [1], with the ability to detect, retrieve, and generate specific anatomical landmarks from medical image datasets. Despite a plethora of advances in the field of medical image registration, researchers are still confronted with issues such as label correspondence across sequences, high computational burden, and background noise during signal acquisition. A simplified method proposed in [1] utilizes an appropriate transformation vector to achieve a quasi-optimized moving-image registration procedure on real chest scans. The extended framework, adapted from the DeepReg architecture, enabled warped moving labels to be mapped onto their fixed counterparts, hence earmarking the collision-free zones around the phrenic nerve and innominate vein. A deep neural network processes the segmented masks as binary pixelated images, compresses them, and extracts the areas of interest from images degraded by erosion, blurring, and uneven contrast. Despite the high accuracy rates recorded, there is a diagnostic need to improve decision-based networks for artificial intelligence (AI) driven classification and localization of moving tumors. Several authors have performed such transfer learning (TL) with CNN models such as AlexNet, DenseNet, Residual Network (ResNet), and Residual Network 50 version 2 (ResNet50v2). Following this approach, authors such as Islam et al. [2] developed an improved DL method to detect and diagnose COVID-19 from X-ray images by combining CNNs with a long short-term memory (LSTM) network. The proposed method utilized a novel adversarial loss for high-accuracy marker localization between warped moving and fixed images with respect to ground-truth voxels. The advantage of this method is that ground-truth image alignment is not required, owing to the inherent use of single images instead of image pairs. Among unsupervised learning methods, the work by De Backer et al. [3] resonates the most, bearing the closest resemblance to our experiment. However, owing to certain efficiency issues in reconstruction and neural network accuracy, we deem our method the most feasible for image-guided surgery, performing trajectory mapping via tumor localization.
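    The weak supervision signal described above, comparing warped moving labels with their fixed counterparts, is commonly implemented in DeepReg-style frameworks as a Dice overlap between label volumes. Below is a minimal sketch of that objective; the binary volumes are synthetic stand-ins, not data from [1].

    import numpy as np

    def dice(warped_label: np.ndarray, fixed_label: np.ndarray, eps: float = 1e-6) -> float:
        """Dice similarity between two binary label volumes (1.0 = perfect overlap)."""
        inter = np.sum(warped_label * fixed_label)
        return (2.0 * inter + eps) / (warped_label.sum() + fixed_label.sum() + eps)

    def label_loss(warped_label: np.ndarray, fixed_label: np.ndarray) -> float:
        """Loss minimised during registration training: 1 - Dice."""
        return 1.0 - dice(warped_label, fixed_label)

    if __name__ == "__main__":
        fixed = np.zeros((32, 32, 32))
        fixed[10:20, 10:20, 10:20] = 1          # synthetic fixed label
        moving = np.roll(fixed, shift=2, axis=0)  # mis-aligned moving label
        print("Dice before alignment:", round(dice(moving, fixed), 3))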

    Augmented Reality Applications for Image-Guided Robotic Interventions using deep learning algorithms

    A significant breakthrough in the field of surgery has been the integration of augmented reality (AR) into standard robot operations, allowing anatomical objects to be digitalized and overlaid onto a real-life scene before or during an intervention. This paper provides an overview of the methodology used to reconstruct and register laparoscopic head-and-neck image sequences for an AR tool. Deep learning (DL) algorithms are designed to strategically place fiducial markers or labels in a dataset, enabling a virtual tool path to be set up for guiding the end effector of a robot. We introduce a dataset of 271 images of patients from four different clinics in Quebec with a proven history of head-and-neck cancer. We then propose a marker-based registration method for mapping a trajectory during surgery, utilizing an unsupervised neural network to compute the medical image transformations. During the training stage, we use an optimized convolutional neural network (CNN) that warps a set of labels from the moving image to match their counterparts in the fixed image. To this end, we compare the loss functions between warped moving labels and fixed labels with respect to the ground truth. Next, we propose a UNet architecture and measure the label localization accuracy throughout the test sequences relative to the initial output results. Our experiments showed that the UNet outperformed the initial CNN architecture, with the best performance outcomes corresponding to loss metrics closer to 1.0.
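    To make the architecture comparison concrete, here is a minimal PyTorch sketch of a small 2-D UNet with the characteristic encoder-decoder structure and skip connections; the channel widths and depth are illustrative assumptions, not the exact configuration used in this work.

    import torch
    import torch.nn as nn

    def block(c_in: int, c_out: int) -> nn.Sequential:
        """Two 3x3 convolutions with ReLU activations."""
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        def __init__(self, in_ch: int = 1, out_ch: int = 1):
            super().__init__()
            self.enc1, self.enc2 = block(in_ch, 16), block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.bott = block(32, 64)
            self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec2 = block(64, 32)
            self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = block(32, 16)
            self.head = nn.Conv2d(16, out_ch, 1)   # per-pixel label logits

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            b = self.bott(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
            return self.head(d1)

    if __name__ == "__main__":
        y = TinyUNet()(torch.randn(1, 1, 64, 64))
        print(y.shape)  # torch.Size([1, 1, 64, 64])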